
Video retrieval based on deep convolutional neural network


Abstract

Recently, with the enormous growth of online videos, fast video retrieval research has received increasing attention. As an extension of image hashing techniques, traditional video hashing methods mainly depend on hand-crafted features and transform the real-valued features into binary hash codes. As videos provide far more diverse and complex visual information than images, extracting features from videos is much more challenging than that from images. Therefore, high-level semantic features to represent videos are needed rather than low-level hand-crafted methods. In this paper, a deep convolutional neural network is proposed to extract high-level semantic features and a binary hash function is then integrated into this framework to achieve an end-to-end optimization. Particularly, our approach also combines a triplet loss function, which preserves the relative similarity and difference of videos, and a classification loss function as the optimization objective. Experiments have been performed on two public datasets and the results demonstrate the superiority of our proposed method compared with other state-of-the-art video retrieval methods.

Bibliographic record

  • Authors: Dong, Yj; Li, JG
  • Year: 2017
  • Format: PDF
